Probabilistic time series forecasting is crucial in many application domains, such as retail, e-commerce, finance, and biology. With the increasing availability of large volumes of data, a number of neural architectures have been proposed for this problem. In particular, Transformer-based methods achieve state-of-the-art performance on real-world benchmarks. However, these methods require a large number of parameters to be learned, which imposes high memory requirements on the computational resources used to train such models. To address this problem, we introduce a novel bidirectional temporal convolutional network (BiTCN), which requires an order of magnitude fewer parameters than common Transformer-based methods. Our model combines two temporal convolutional networks (TCNs): the first network encodes future covariates of the time series, while the second network encodes past observations and covariates. We jointly estimate the parameters of the output distribution via these two networks. Experiments on four real-world datasets show that our method performs on par with four state-of-the-art probabilistic forecasting methods, including a Transformer-based approach and WaveNet, on two point metrics (sMAPE, NRMSE) as well as on a set of range metrics (quantile loss percentiles) in the majority of cases. Second, we demonstrate that our method requires significantly fewer parameters than Transformer-based methods, which means the model can be trained faster with significantly lower memory requirements, and consequently reduces the infrastructure cost of deploying these models.
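To make the two-network design concrete, here is a minimal PyTorch sketch of the idea described in the abstract: a causal TCN over past observations and covariates, a time-reversed TCN over known future covariates, and a joint head that outputs distribution parameters. All class names, dimensions, and the Gaussian output choice are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """1D convolution left-padded so each output depends only on current and past steps."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                          # x: (batch, channels, time)
        return self.conv(F.pad(x, (self.pad, 0)))

class TCNEncoder(nn.Module):
    """Stack of dilated causal convolutions with residual connections."""
    def __init__(self, in_ch, hidden, n_layers=3):
        super().__init__()
        self.inp = nn.Conv1d(in_ch, hidden, kernel_size=1)
        self.layers = nn.ModuleList(
            CausalConv1d(hidden, dilation=2 ** i) for i in range(n_layers)
        )

    def forward(self, x):
        h = self.inp(x)
        for layer in self.layers:
            h = h + torch.relu(layer(h))           # residual block
        return h

class BiTCNSketch(nn.Module):
    def __init__(self, past_dim, cov_dim, hidden=32):
        super().__init__()
        # Backward-looking network: past observations plus past covariates.
        self.past_net = TCNEncoder(past_dim, hidden)
        # Forward-looking network: known future covariates, run over the
        # reversed time axis so information flows from the future backwards.
        self.future_net = TCNEncoder(cov_dim, hidden)
        self.head = nn.Conv1d(2 * hidden, 2, kernel_size=1)  # (mu, raw scale)

    def forward(self, past, future_cov):
        h_past = self.past_net(past)
        h_future = self.future_net(future_cov.flip(-1)).flip(-1)
        params = self.head(torch.cat([h_past, h_future], dim=1))
        mu, scale = params[:, 0], F.softplus(params[:, 1])   # scale > 0
        return torch.distributions.Normal(mu, scale)

# Toy usage: 8 series, 24 steps, target plus 4 covariates in the past, 4 known
# future covariates; train by minimizing the negative log-likelihood.
model = BiTCNSketch(past_dim=5, cov_dim=4)
dist = model(torch.randn(8, 5, 24), torch.randn(8, 4, 24))
loss = -dist.log_prob(torch.randn(8, 24)).mean()
```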
Pre-trained language models (LMs) store knowledge in their parameters and can generate informative responses when used in conversational systems. However, LMs suffer from the problem of "hallucination:" they may generate plausible-looking statements that are irrelevant or factually incorrect. To address this problem, we propose a contrastive learning scheme, named MixCL. A novel mixed contrastive objective is proposed to explicitly optimize the implicit knowledge elicitation process of LMs, and thus reduce their hallucination in conversations. We also examine negative sampling strategies of retrieved hard negatives and model-generated negatives. We conduct experiments on Wizard-of-Wikipedia, a public, open-domain knowledge-grounded dialogue benchmark, and assess the effectiveness of MixCL. MixCL effectively reduces the hallucination of LMs in conversations and achieves the highest performance among LM-based dialogue agents in terms of relevancy and factuality. We show that MixCL achieves comparable performance to state-of-the-art KB-based approaches while enjoying notable advantages in terms of efficiency and scalability.
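The abstract does not spell out the mixed contrastive objective, but the backbone of any such scheme is a contrastive term over a knowledge-grounded positive and hard negatives (retrieved or model-generated). Below is a generic InfoNCE-style sketch in PyTorch under that assumption; all names are hypothetical, and this is not MixCL itself.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: pull the anchor toward the positive and push it
    away from hard negatives. anchor/positive: (batch, dim);
    negatives: (batch, n_neg, dim)."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_sim = (anchor * positive).sum(-1, keepdim=True)       # (batch, 1)
    neg_sim = torch.einsum("bd,bnd->bn", anchor, negatives)   # (batch, n_neg)
    logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
    # The positive always sits at index 0 of the logits.
    return F.cross_entropy(logits, torch.zeros(len(anchor), dtype=torch.long))
```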
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Existing natural language understanding (NLU) models often rely on dataset biases rather than intended task-relevant features to achieve high performance on specific datasets. As a result, these models perform poorly on datasets outside the training distribution. Some recent studies address the above issue by reducing the weights of biased samples during the training process. However, these methods still encode biased latent features in representations and neglect the dynamic nature of bias, which hinders model prediction. We propose an NLU debiasing method, named debiasing contrastive learning (DCT), to simultaneously alleviate the above problems based on contrastive learning. We devise a debiasing positive sampling strategy to mitigate biased latent features by selecting the least similar biased positive samples. We also propose a dynamic negative sampling strategy to capture the dynamic influence of biases by employing a bias-only model to dynamically select the most similar biased negative samples. We conduct experiments on three NLU benchmark datasets. Experimental results show that DCT outperforms state-of-the-art baselines on out-of-distribution datasets while maintaining in-distribution performance. We also verify that DCT can reduce biased latent features from the model's representations.
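As a rough illustration of the two sampling strategies, the sketch below picks the least similar biased positive and the most similar biased negative by cosine similarity over features assumed to come from a bias-only model; the function and tensor names are hypothetical, not the paper's API.

```python
import torch

def select_debiasing_samples(anchor_feat, pos_feats, neg_feats):
    """Pick the positive *least* similar to the anchor under the bias-only
    model's features (so the pair shares few biased features), and the
    negative *most* similar (a hard, bias-driven negative).
    anchor_feat: (dim,); pos_feats/neg_feats: (n, dim)."""
    pos_sims = torch.cosine_similarity(anchor_feat.unsqueeze(0), pos_feats)
    neg_sims = torch.cosine_similarity(anchor_feat.unsqueeze(0), neg_feats)
    positive = pos_feats[pos_sims.argmin()]   # least similar biased positive
    negative = neg_feats[neg_sims.argmax()]   # most similar biased negative
    return positive, negative
```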
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
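Since the models are released publicly, they can be loaded with the Hugging Face transformers library; the sketch below uses the smaller released bloom-560m checkpoint to keep the example lightweight (the full 176B model is published as bigscience/bloom).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small BLOOM variant and generate a continuation for a prompt.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is a 176B-parameter open-access", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```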
A common approach to avoiding overfitting in supervised learning is early stopping, where a held-out set is used for iterative evaluation during training to find the number of training steps that maximizes generalization. However, such a method requires a disjoint validation set, so part of the labeled data from the training set is usually left aside for this purpose, which is not ideal when training data is scarce. Furthermore, when the training labels are noisy, the model's performance on the validation set may not be an accurate proxy for generalization. In this paper, we propose a method to discover the early stopping point during training iterations without the need for a validation set. We first show that, in the overparameterized regime, the randomly initialized weights of a linear model converge to the same direction during training. Using this result, we propose to train two parallel instances of a linear model, initialized with different random seeds, and use their intersection as a signal to detect overfitting. To detect the intersection, we use the cosine distance between the weights of the parallel models over training iterations. Noting that the last layer of a NN is a linear map of the pre-last-layer activations to output logits, we build on our criterion for linear models using the new notion of counterfactual weights and propose an extension to multilayer networks. We conduct experiments on two areas where early stopping has a noticeable effect in preventing overfitting of a NN: (i) learning from noisy labels; and (ii) learning to rank in IR. Our experiments on four widely used datasets confirm the effectiveness of our method for generalization. For a wide range of learning rates, our method, called the Cosine-Distance Criterion (CDC), leads to better generalization on average than all methods we compare against in almost all of the tested cases.
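A toy NumPy sketch of the core mechanism on an overparameterized linear regression: two instances with different random initializations are trained in parallel, and the cosine distance between their weight vectors is tracked over iterations. The plateau-based stopping rule below is a simplification of the paper's intersection test, not the exact CDC procedure.

```python
import numpy as np

def cosine_distance(u, v):
    return 1.0 - u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Overparameterized toy problem: 200 features, only 50 noisy samples.
X = np.random.default_rng(42).normal(size=(50, 200))
y = X[:, :5].sum(axis=1) + 0.5 * np.random.default_rng(43).normal(size=50)

# Two parallel instances of the same linear model, different random seeds.
w1 = np.random.default_rng(0).normal(size=200)
w2 = np.random.default_rng(1).normal(size=200)
lr, history = 1e-3, []
for step in range(2000):
    for w in (w1, w2):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad                      # in-place update of w1 / w2
    history.append(cosine_distance(w1, w2))
    # Simplified stopping rule: halt once the two weight trajectories have
    # effectively intersected, i.e. the cosine distance plateaus.
    if step > 10 and abs(history[-1] - history[-2]) < 1e-6:
        print(f"early stop at step {step}, cosine distance {history[-1]:.4f}")
        break
```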
When medical images are used for diagnosis, whether by clinicians or artificial intelligence (AI) systems, it is important that the images are of high quality. When image quality is low, the physical examination that produced the images often needs to be redone. In telemedicine, a common problem is that quality issues are only flagged after the patient has left the clinic, meaning they must return in order to redo the exam. This can be especially difficult for people living in remote regions, who account for a large portion of the patients of Portal Telemedicina, a digital healthcare organization in Brazil. In this paper, we report on ongoing work regarding (i) an AI system that flags and explains low-quality medical images in real time, (ii) an interview study to understand the explanation needs of stakeholders using the AI system at our company, and (iii) a longitudinal user study design aimed at examining the effect of including explanations on the workflow of technicians in our clinics. To our knowledge, this will be the first longitudinal study to evaluate the impact of XAI methods on end users - stakeholders who use AI systems but do not have AI-specific expertise. We welcome feedback and suggestions on our experimental setup.
Learned recommender systems may inadvertently leak information about their training data, leading to privacy violations. We investigate the privacy threats faced by recommender systems through the lens of membership inference. In such attacks, an adversary aims to infer whether a user's data was used to train the target recommender. To achieve this, previous work has used a shadow recommender to derive training data for the attack model, and then predicts membership by computing difference vectors between users' historical interactions and recommended items. State-of-the-art methods face two challenging problems: (1) the training data for the attack model is biased due to the gap between shadow and target recommenders; and (2) hidden states in recommenders are not observable, resulting in inaccurate estimates of the difference vectors. To address these limitations, we propose a debiasing learning for membership inference attacks against recommender systems (DL-MIA) framework, which has four main components: (1) a difference vector generator, (2) a disentangled encoder, (3) a weight estimator, and (4) an attack model. To mitigate the gap between recommenders, a variational autoencoder (VAE)-based disentangled encoder is devised to identify recommender-invariant and recommender-specific features. To reduce the estimation bias, we design a weight estimator that assigns a truth-level score to each difference vector to indicate its estimation accuracy. We evaluate DL-MIA against both general recommenders and sequential recommenders on three real-world datasets. Experimental results show that DL-MIA effectively alleviates training and estimation biases simultaneously, and achieves state-of-the-art attack performance.
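For intuition, a difference vector in the sense used above can be sketched as the gap between the centroid of a user's historical interactions and the centroid of the recommended items in an item-embedding space; the centroid choice and all names below are illustrative assumptions, not the paper's exact generator.

```python
import torch

def difference_vector(history_ids, recommended_ids, item_embeddings):
    """Attack feature sketch: difference between the centroid of a user's
    historical interactions and the centroid of the items the target
    recommender returns. Members' histories tend to sit closer to their
    recommendations than non-members', which the attack model exploits."""
    hist = item_embeddings[history_ids].mean(dim=0)
    rec = item_embeddings[recommended_ids].mean(dim=0)
    return hist - rec  # fed to the attack model as an input feature

items = torch.randn(1000, 64)   # toy item embedding table
v = difference_vector(torch.tensor([3, 17, 512]), torch.tensor([3, 99]), items)
```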
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing on linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Task-oriented dialogue systems (TDSs) are assessed mainly in an offline setting or through human evaluation. The evaluation is often limited to single-turn exchanges or is very time-intensive. As an alternative, user simulators that mimic user behavior allow us to consider a broad set of user goals and to generate human-like conversations for simulated evaluation. Employing existing user simulators to evaluate TDSs is challenging, as user simulators are primarily designed to optimize the dialogue policies of TDSs and have limited evaluation capabilities. Moreover, the evaluation of user simulators is itself an open challenge. In this work, we propose a metaphorical user simulator for end-to-end TDS evaluation, where we define a simulator to be metaphorical if it simulates a user's analogical thinking in its interactions with the system. We also propose a tester-based evaluation framework to generate variants, i.e., dialogue systems with different capabilities. Our user simulator constructs a metaphorical user model that assists the simulator in reasoning by referring to prior knowledge when it encounters new items. We estimate the quality of the simulator by examining the simulated interactions between the simulator and the variants. Our experiments are conducted on three TDS datasets. The metaphorical user simulator shows better consistency with manual evaluation than an agenda-based simulator and a Seq2Seq model on all three datasets. Our tester framework demonstrates efficiency, and it generalizes and scales better since it can be applied to dialogues in multiple domains and to multiple tasks, such as dialogue recommendation and e-commerce dialogues.